Explicable Policy Search

Neural Information Processing Systems

Human teammates often form conscious and subconscious expectations of each other during interaction. Teaming success is contingent on whether such expectations are met. Similarly, for an intelligent agent to operate alongside a human, it must consider the human's expectation of its behavior. Disregarding such expectations can lead to loss of trust and degraded team performance. A key challenge here is that the human's expectation may not align with the agent's optimal behavior.
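
The abstract frames explicability as a tension between the agent's optimal behavior and the behavior the human expects. A minimal sketch of that trade-off, assuming the agent simply augments its task reward with a weighted explicability bonus; `shaped_reward`, `explicability_score`, and the weight `lam` are illustrative names, not the paper's actual formulation:

```python
def shaped_reward(task_reward, explicability_score, lam=0.5):
    """Trade off task performance against matching the human's expectation.

    task_reward:          the environment's reward for the agent's action
    explicability_score:  in [0, 1], how well the action matches what the
                          human expects the agent to do (1 = fully expected)
    lam:                  weight on explicability (illustrative, not tuned)
    """
    return task_reward + lam * explicability_score

# Toy usage: an optimal-but-surprising action vs. a slightly worse
# action that the human anticipates.
print(shaped_reward(task_reward=1.0, explicability_score=0.1))  # 1.05
print(shaped_reward(task_reward=0.8, explicability_score=0.9))  # 1.25
```

With a larger `lam`, the agent increasingly prefers the anticipated action even when it is suboptimal for the task.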


Generating Active Explicable Plans in Human-Robot Teaming

Hanni, Akkamahadevi, Zhang, Yu

arXiv.org Artificial Intelligence

Intelligent robots are redefining a multitude of critical domains but are still far from being fully capable of assisting human peers in day-to-day tasks. An important requirement of collaboration is for each teammate to maintain and respect an understanding of the others' expectations of it. Failing to do so may lead to serious issues such as loose coordination between teammates, reduced situation awareness, and ultimately teaming failures. Hence, it is important for robots to behave explicably by meeting the human's expectations. One of the challenges here is that the human's expectations are often hidden and can change dynamically as the human interacts with the robot, whereas existing approaches to generating explicable plans often assume that the human's expectations are known and static. In this paper, we propose the idea of active explicable planning to relax this assumption. We apply a Bayesian approach to model and predict the human's dynamic beliefs and expectations, making explicable planning more anticipatory. We hypothesize that active explicable plans can be both more efficient and more explicable than plans generated by existing methods. In our experimental evaluation, we verify that our approach generates more efficient explicable plans while successfully capturing the dynamic belief changes of the human teammate.
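
The paper's key mechanism is a Bayesian model of the human's dynamically changing belief. A minimal sketch of one belief-update step over a discrete set of candidate expectation models; the hypothesis space, the `bayes_update` helper, and the likelihood values are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

def bayes_update(prior, likelihoods):
    """One step of Bayesian belief update over candidate human models.

    prior:        shape (K,), current probability of each hypothesis about
                  the human's expectation of the robot
    likelihoods:  shape (K,), P(observation | hypothesis k) for the latest
                  observation of the human's behavior
    """
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Toy example: three candidate expectation models; the latest observation
# is twice as likely under model 1 as under models 0 and 2.
belief = np.array([1/3, 1/3, 1/3])
belief = bayes_update(belief, np.array([0.2, 0.4, 0.2]))
print(belief)  # [0.25, 0.5, 0.25]
```

Repeating this update after every human action is what lets the planner track expectations that drift during the interaction rather than assuming them fixed up front.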


Interactive Plan Explicability in Human-Robot Teaming

Zakershahrak, Mehrdad, Zhang, Yu

arXiv.org Artificial Intelligence

Human-robot teaming is one of the most important applications of artificial intelligence in the fast-growing field of robotics. For effective teaming, a robot must not only maintain a behavioral model of its human teammates to project the team status, but also be aware of its human teammates' expectations of itself. Being aware of these expectations leads to robot behaviors that better align with them, facilitating more efficient and potentially safer teams. Our work addresses the problem of human-robot cooperation with the consideration of such teammate models in sequential domains by leveraging the concept of plan explicability. In plan explicability, however, the human is considered solely as an observer. In this paper, we extend plan explicability to consider interactive settings where human and robot behaviors can influence each other. We term this new measure Interactive Plan Explicability. We compare the joint plan generated with this measure using the Fast-Forward (FF) planner against the plan created by FF without it, as well as against plans created by actual human subjects. Results indicate that the explicability score of plans generated by our algorithm is comparable to that of the human plans and better than that of plans created by FF without the measure, implying that the plans created by our algorithm align better with the joint plans the human expects during execution. This can lead to more efficient collaboration in practice.
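
To make the comparison described in the abstract concrete, here is a hedged sketch of how an explicability score might be computed and compared across plans. The original work derives the labeling from models learned from human data; the `is_explicable` predicate, the expected action set, and the action names below are purely illustrative:

```python
def explicability_score(plan, is_explicable):
    """Fraction of plan actions the human is expected to find explicable.

    plan:          sequence of action names
    is_explicable: predicate standing in for the model of the human's
                   expectations (learned from data in the original work)
    """
    if not plan:
        return 0.0
    return sum(is_explicable(a) for a in plan) / len(plan)

# Toy comparison mirroring the paper's evaluation: a plan generated with
# the measure vs. a cost-only FF plan (the expected set is made up).
expected = {"pick", "hand_over", "move_to_human"}
ours = ["move_to_human", "pick", "hand_over"]
ff   = ["pick", "throw", "move_away"]
print(explicability_score(ours, lambda a: a in expected))  # 1.0
print(explicability_score(ff,   lambda a: a in expected))  # ~0.33
```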


Explicability as Minimizing Distance from Expected Behavior

Kulkarni, Anagha, Zha, Yantian, Chakraborti, Tathagata, Vadlamudi, Satya Gautam, Zhang, Yu, Kambhampati, Subbarao

arXiv.org Artificial Intelligence

In order to have effective human-AI collaboration, it is not enough to address the question of autonomy; an equally important question is how the AI's behavior is perceived by its human counterparts. When an AI agent's task plans are generated without such considerations, they may often appear inexplicable from the human's point of view. This problem arises due to the human's partial or inaccurate understanding of the agent's planning process and/or model. It may have serious implications for human-AI collaboration, from increased cognitive load and reduced trust in the agent to more serious safety concerns in interactions with a physical agent. In this paper, we address this issue by modeling plan explicability as a function of the distance between the plan the agent makes and the plan the human expects it to make. To this end, we learn a distance function based on different plan distance measures that can accurately model this notion of plan explicability, and we develop an anytime search algorithm that uses this distance as a heuristic to produce progressively more explicable plans. We evaluate the effectiveness of our approach in a simulated autonomous car domain and a physical service robot domain, providing empirical evaluations that demonstrate the usefulness of our approach in making the planning process of an autonomous agent conform to human expectations.
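
A rough sketch of the two ingredients the abstract names: a plan distance measure and an anytime loop that surfaces progressively more explicable plans. The actual system learns a distance function over several plan distance measures and uses it as a heuristic inside plan search; the hand-coded action-set Jaccard distance and the fixed candidate list here are stand-ins for both:

```python
def action_set_distance(plan_a, plan_b):
    """One simple plan distance measure: Jaccard distance between the
    sets of actions in two plans (the paper also considers others,
    such as causal-link and state-sequence distances)."""
    a, b = set(plan_a), set(plan_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def anytime_most_explicable(candidates, expected_plan):
    """Anytime-style selection: yield each new best candidate (least
    distant from the expected plan) as it is found, so the caller can
    stop whenever its deliberation budget runs out."""
    best = None
    for plan in candidates:
        d = action_set_distance(plan, expected_plan)
        if best is None or d < best[0]:
            best = (d, plan)
            yield best

# Toy usage with a made-up expected plan and two candidates.
expected = ["goto_kitchen", "pick_cup", "deliver_cup"]
candidates = [
    ["goto_kitchen", "pick_cup", "throw_cup"],
    ["goto_kitchen", "pick_cup", "deliver_cup"],
]
for dist, plan in anytime_most_explicable(candidates, expected):
    print(dist, plan)  # 0.5 ..., then 0.0 ...
```

The anytime property matters in interactive settings: the robot can act on the most explicable plan found so far and keep refining it as time allows.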